Rory Yates (in the linked article below) explains how AI could radically transform the insurance industry while warning of the barriers that could thwart that goal, summarised in the quote from The AI Journal at the bottom of this article.

Yates lists the following challenges.

"AI needs some ‘hard yards’ put in to become the powerhouse driver of positive change we all know it can be in insurance. It needs the right foundations, and in insurance this means new foundations, of that I have absolutely no doubt.

A new business model

Data fluid: able to treat data as a perishable asset, constantly mining it for insight and acting on it as close to real-time as needed.

Built around a customer and not a policy: For example, operating an “API” first model and assuming this is the way you successfully integrate is not sufficient for orchestrating expanding ecosystems. You need to be able to rapidly adapt to make those partnerships work in the context of a process, efficiency, and customer/employee experience.

Multi-agent & multi-model: Whatever AI you are using today, it will likely be obsolete very soon. The ability to harness AI is also about interoperability and control. Probabilistic models offer us a new way of interacting and shaping outcomes. Agentic AI will take this to another level. Insurers must ensure they have the right kit to move quickly and outpace their competition.

The reality? As tempting as key AI use cases might be, wiring them into legacy systems deployed in most insurers could be a costly mistake. Complex point solutions, welding you to key GenAI models, creates dependencies on specific AI models that aren’t easily changed. Equally, harnessing these services to ensure you don’t fall foul of a hallucination, or ways to control the data can be highly complex.

Let’s not build AI Skyscrapers on bungalow foundations."

Firm foundations and infrastructure

Too many insurers have deployed AI as a patchwork of unconnected point solutions rather than as a strategic enabler within a sound and well-connected infrastructure.


High-quality data is the bedrock of AI success: AI is as subject to ‘Garbage in, Garbage out’ as any other technology or process. You will have heard of hallucinations, where generative AI delivers complete rubbish because it has been trained on vast datasets that contain rubbish. Add to this that LLMs are contextually blind and have no empathy, and you can see where this is going.


LLMs that have trawled data from public sources will combine true and useful facts with conspiracy theories and lies, and will predict cause and effect where there is none. There are tools and processes to help weed these out, and we are repeatedly warned that professionals should validate outputs for accuracy.


A firm foundation is vital for validating training data, measuring output accuracy, and correcting mistakes until consistent accuracy at acceptable levels is achieved.

Ensure automated data integrity through external verification, using tests for code and definitive answers for math problems. This process acts as a filter, excluding low-quality inputs. Organisations can replicate this rigour by developing domain-specific verification systems, such as knowledge graphs that map factual relationships and give context to the relationships between people, companies, professions, locations, etc. (e.g., verifying used vehicle pricing against established and current research, as sketched below).

Fine-tuning small, specialised language models on proprietary data offers another route to accuracy, but you will also want to ingest external data. Add to that the stream of historical and real-time data coming from the many platforms and applications deployed in an insurer, and you can see the need for data curation, categorisation, and orchestration.
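To make the external-verification idea concrete, here is a minimal sketch in Python. The reference price bands, tolerance, and record structure are illustrative assumptions, not real market data or any vendor's API; the point is simply that AI-extracted values are checked against curated reference data before they are trusted.

```python
# Minimal sketch: verify AI-extracted vehicle valuations against reference data.
# The price bands and tolerance below are invented for illustration.
from dataclasses import dataclass

# Hypothetical reference data: (make, model, year) -> (min, max) market value in GBP
REFERENCE_PRICES = {
    ("Ford", "Fiesta", 2019): (8_000, 13_500),
    ("VW", "Golf", 2020): (13_000, 21_000),
}

@dataclass
class ExtractedValuation:
    make: str
    model: str
    year: int
    value_gbp: float

def verify_valuation(item: ExtractedValuation, tolerance: float = 0.10) -> bool:
    """Return True if the value sits within the reference band (plus tolerance);
    False means the record should be routed to a human for review."""
    band = REFERENCE_PRICES.get((item.make, item.model, item.year))
    if band is None:
        return False  # no reference data: do not trust the model output blindly
    low, high = band
    return low * (1 - tolerance) <= item.value_gbp <= high * (1 + tolerance)

# A valuation far outside the band is filtered out as a likely hallucination
print(verify_valuation(ExtractedValuation("Ford", "Fiesta", 2019, 45_000)))  # False
```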

The Systems Infrastructure Conundrum

Insurers still rely on legacy technology: mainframes and decades-old software reliably powering transactions and unchanging processes. Global insurers rely on it even more, because M&A activity has left them with a different technology stack in each country.


Those that invested in core systems like Guidewire, Duck Creek, and Majesco invariably did so in one line of business, such as motor, whilst their home and contents business runs on older technology. Additionally, Guidewire can hardly be labelled an agile and flexible core system. This has mattered less whilst most policies remain annually renewed products with little innovation. Things are changing, though. New competition, the changing needs of younger generations, and unmet protection gaps (see further reading) put pressure on insurers to get to market faster with personalised services and products.


That opens the potential for new MACH-architected (Microservices, API-first, Cloud-native, Headless) core systems that enable personalisation, support dynamic innovation, and provide the data orchestration capabilities described above. We see insurers often choosing these platforms for new product launches to add flexibility, fluidity, and the ability to apply the ‘test & learn’ experimentation we have discussed. In alphabetic order, these include:
 


Data curation

Curating data that is relevant, accurate, and as complete as practical is an essential task. Work through it step by step:
1.    Which key processes and decisions need improving to achieve strategic goals?
2.    What data is required?
3.    What mix of AI tools will best accomplish desired outcomes?
4.    Configure tools to test, learn, and apply successful outcomes
5.    Build out issue by issue within a longer-term strategic plan
 

The data collected will invariably come from multiple sources, mostly unstructured, and must be categorised to make it AI-ready by creating a Universal Data Layer:
•    Named entity recognition (see the sketch after this list)
•    Term matching
•    Contextual awareness
•    Leading to a focus on relevant data
•    Real-time data for dynamic decision-making
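As one illustration of the first bullet, a minimal named-entity-recognition pass over an unstructured claim note could look like the sketch below. It assumes spaCy and its small English model are installed (pip install spacy, then python -m spacy download en_core_web_sm); the note text is invented, and a production Universal Data Layer would use domain-tuned models and far richer pipelines.

```python
# Minimal sketch: extract named entities from an unstructured claim note.
# Assumes spaCy and the en_core_web_sm model are installed.
import spacy

nlp = spacy.load("en_core_web_sm")

note = (
    "Policyholder Jane Smith reported storm damage to the roof at 14 High Street, "
    "Leeds on 12 January 2025; the repair quote from Acme Roofing Ltd was 4,200 GBP."
)

doc = nlp(note)
for ent in doc.ents:
    # Labels such as PERSON, GPE, DATE, ORG become the building blocks
    # of a categorised, queryable data layer
    print(f"{ent.text!r:40} {ent.label_}")
```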
 

This is demanding work, augmented with data automation tools such as the Aiimi Insight Engine. It can be leveraged by an insurer’s own data teams and augmented by Aiimi’s vast team of data scientists, engineers, and architects. Similarly, Palantir is moving into insurance from its strengths in the military, government, and healthcare markets.


Data curation and categorisation are too important to tackle without collaborating with AI and data technology partners.

That is before you consider data orchestration.
 

Data Orchestration

To become a customer-centric organisation that personalises products and services requires managing the many relationships across entities, people, and assets, joining past and present with predicted future outcomes. Adding to the complexity, these relationships are spread over multiple core systems of record, policy admin systems, point solutions, and data sources ranging from data warehouses to silos.
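As a toy illustration of what joining those relationships means in practice, the sketch below pulls a customer, two policies, and a claim held in different systems into a single graph. The record IDs and system names are invented; networkx is assumed to be available (pip install networkx).

```python
# Minimal sketch: join customer, policy, and claim records from different
# systems of record into one relationship graph. All IDs and sources are invented.
import networkx as nx

g = nx.MultiDiGraph()

# Nodes carry their source system so data lineage is preserved
g.add_node("CUST-001", kind="customer", source="CRM")
g.add_node("POL-MOTOR-77", kind="policy", source="motor_core_system")
g.add_node("POL-HOME-12", kind="policy", source="legacy_mainframe")
g.add_node("CLM-2025-0042", kind="claim", source="claims_platform")

# Edges express the relationships a customer-centric view depends on
g.add_edge("CUST-001", "POL-MOTOR-77", relation="holds")
g.add_edge("CUST-001", "POL-HOME-12", relation="holds")
g.add_edge("POL-MOTOR-77", "CLM-2025-0042", relation="has_claim")

# One traversal now answers "what do we know about this customer?"
for _, target, data in g.out_edges("CUST-001", data=True):
    print("CUST-001", data["relation"], target)
```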


Managing and orchestrating data for just one outcome is challenging enough, so managing for multiple outcomes increases the complexity geometrically. This will involve a rigorous experimentation and ‘test & learn’ phase in which the accuracy of outcomes is measured until full confidence is achieved in every outcome.
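What that measurement loop could look like for a single outcome is sketched below, assuming you hold back a labelled set of historical cases and set your own acceptance threshold. The predict() stub, the cases, and the 95% threshold are all placeholders.

```python
# Minimal sketch of a 'test & learn' accuracy gate for one outcome.
# Replace predict() with the model or agent under test and supply a real
# labelled holdout set; the threshold is an illustrative choice.
ACCEPTANCE_THRESHOLD = 0.95

def predict(case: dict) -> str:
    # Placeholder decision logic standing in for the AI component being tested
    return "approve" if case["estimated_loss"] < 5_000 else "refer"

holdout = [
    {"estimated_loss": 1_200, "expected": "approve"},
    {"estimated_loss": 9_800, "expected": "refer"},
    {"estimated_loss": 4_500, "expected": "approve"},
]

correct = sum(predict(case) == case["expected"] for case in holdout)
accuracy = correct / len(holdout)
print(f"accuracy = {accuracy:.1%}")

# Only promote the use case to production once accuracy stays above the
# threshold across repeated test & learn iterations
ready_to_scale = accuracy >= ACCEPTANCE_THRESHOLD
```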

This leads to the world of Agentic AI: agents within the insurer connecting to and transacting with external agents, making decisions to optimise outcomes for both customers and insurers.

Suffice to say that if enterprises have found it a challenge to turn single-use-case pilots into successful deployments, Agentic AI is a step too far. Achieving success by applying the steps described above to single use cases will allow you to move on. The push to advance from insurers’ traditional policy-centric business model to a customer-centric one will put extreme pressure on the legacy IT systems that form the underlying infrastructure of most insurers, which is the other part of the infrastructure challenge.
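To make the internal/external agent idea concrete without overstating it, here is a deliberately simplified sketch: an internal claims agent asks an external repair-network agent for options and picks the one that best balances customer and insurer outcomes. Both agents are stubbed functions with invented data and weights; in reality each would sit behind an API, with its own models, guardrails, and audit trail.

```python
# Deliberately simplified sketch of an internal agent negotiating with an
# external agent. Data, scoring weights, and agent behaviour are illustrative.
def external_repair_agent(claim: dict) -> list[dict]:
    # Stub for an external repair-network agent returning offers
    return [
        {"garage": "A", "cost_gbp": 1_450, "days_to_repair": 10},
        {"garage": "B", "cost_gbp": 1_600, "days_to_repair": 3},
    ]

def internal_claims_agent(claim: dict) -> dict:
    offers = external_repair_agent(claim)
    # Blend insurer cost against customer wait time (weights are illustrative);
    # a lower score is better
    return min(offers, key=lambda o: 0.6 * o["cost_gbp"] + 0.4 * o["days_to_repair"] * 100)

print(internal_claims_agent({"claim_id": "CLM-2025-0042", "peril": "collision"}))
# Picks garage B: slightly dearer for the insurer, but far faster for the customer
```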
 

Current and trending status of AI


In the world of Generative AI and LLMs, the shockwaves that DeepSeek generated explain everything. Investors and highly valued AI vendors, from Anthropic and Meta to Mistral and OpenAI, were blindsided by the newcomer from China that claimed to deliver more for less. Hardware, software, and cloud infrastructure companies saw their stock prices plunge dramatically, and there were even cries of ‘foul.’ It is still a Wild West landscape, with every week bringing new products, releases, and claims of enabling transformation.


AI is, of course, far more than Generative AI, Large Language Models, and Agentic AI. It has its roots in the mid-twentieth century, and today many mature AI tools are deployed analysing data and predicting outcomes. The launch of ChatGPT by OpenAI heralded the hyped promise of Generative AI and trained LLMs automating work, reducing costs, improving customer service, completing research, and generating innovative plans. Vendors claim we are approaching the point at which it will perform as well as, or even better than, humans: the point of Artificial General Intelligence (AGI).


Teams of agents would replace much of human teamwork, automatically answering questions, guiding purchase decisions, underwriting risk, settling claims, and changing insurance forever.
Those who rush in without proper planning and preparation will find this to be a hypothetical world rather than a real one.


On the other hand, progress is real. These new tools are already augmenting humans, and the rate of advance is almost bewildering. DeepSeek was downloaded in unprecedented volumes. Alibaba launched QwQ-32B soon afterwards, and OpenAI responded with o3-mini and Deep Research (does that show some paranoia about DeepSeek?). You need people working full-time to track the pros and cons of each relative to the problems you wish to solve and the opportunities you wish to leverage. More on that below.


It does show that you need a coherent strategy to experiment, learn, and choose the optimal use cases that offer you a competitive advantage. Like it or not, you are experimenting already.


Insurers are already drawn into experimenting with Generative AI, Agentic AI and LLMs


Many of the point solutions that insurers adopt already embed AI tools in their products.
 

RightIndem embeds AI in its digital claims platform to: 
•    Prompt claimants registering claims via eNOL to fill in missing, vital details
•    Automatically categorise clauses and sub-clauses to triage claims faster
•    Detect inconsistencies in claims, e.g. described weather versus actual weather conditions (a conceptual sketch follows this list)
•    Validate claims against policy wording, inclusions, and exclusions
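The weather cross-check in the third bullet can be illustrated with a conceptual sketch. This is not RightIndem's implementation; the recorded weather data is a hard-coded stand-in for a call to a weather-data provider, and the postcode, date, and gust threshold are invented.

```python
# Conceptual sketch: flag claims whose described weather conflicts with
# recorded conditions for the loss date and location. All data is invented.
RECORDED_WEATHER = {
    ("LS1 4AP", "2025-01-12"): {"condition": "clear", "max_gust_mph": 18},
}

def weather_consistent(postcode: str, loss_date: str, described_peril: str) -> bool:
    """Return False when the described peril looks inconsistent with records,
    so the claim can be routed for human review rather than auto-settled."""
    record = RECORDED_WEATHER.get((postcode, loss_date))
    if record is None:
        return True  # no data available: follow the normal handling path
    if described_peril == "storm":
        return record["max_gust_mph"] >= 40  # illustrative storm threshold
    return True

# A 'storm damage' claim reported for a calm, clear day gets flagged
print(weather_consistent("LS1 4AP", "2025-01-12", described_peril="storm"))  # False
```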


Insurers should embrace this development of AI in the context of the entire insurance value chain, just as The AA has.


Hyperexponential automates underwriting processes, allowing underwriters to spend less time on admin and more time underwriting. It leverages Generative AI and other tools. Underwriting and claims are two parts of the value chain that insurers should surely connect more closely. If you deploy both RightIndem and Hyperexponential, you need the tools and support to orchestrate the intersections between the two apps: an AI and data orchestration layer.
 

Cytora digitises risk for commercial insurers, leveraging LLMs and multiple agents to process higher volumes of submissions at lower marginal cost and with greater control. Some clients will combine Hyperexponential and Cytora for risk management. 


This picture expands to all parts of the value chain from product planning, through distribution, to supply chain management, counter fraud, and automating payments. Every time you start a trial with one or more apps/platforms that leverage AI you are experimenting with AI.

Like any project, the same detailed planning must be put in place to maximise successful outcomes.  

1. Executive adoption decisions based on something other than FOMO
2. Compelling use cases that will tackle real problems and/or leverage significant opportunities
3. An honest and complete business-case ROI
4. Systems-engineering thinking
5. Effective test campaigns, i.e. test, learn & iterate
6. User buy-in to the vision and the experiments
7. Understanding of the technology's true capabilities and limitations
8. A backup plan if the experiment fails
9. Self-reflection over prior failures: what has changed in our approach this time?
10. Have the technology partners you chose ‘eaten their own dog food’ and proved they can deliver the outcomes you desire using the tools they promote?

Thanks to Chris Surdak for this advice. Chris has been deeply involved with all aspects of AI since his days at NASA. Yes, AI, RPA, and deep learning have a long history. 

These are the ‘hard yards’ that Rory Yates refers to.

You can see this with Johnson & Johnson after three years of experimentation with AI, including Generative AI:

"Johnson & Johnson has shifted its generative AI strategy away from broad experimentation across the healthcare conglomerate to a more focused approach.

Chief Information Officer Jim Swanson said the move ensures that the company allocates resources only to the highest-value generative AI use cases, while it cuts projects that are redundant or simply not working, or where a technology other than GenAI works better.

Another lesson is that AI augments humans, and does not replace them. That's for a longer-term future. 

“That was a pivot we made after about a year of learning,” Swanson said. “Now we’ve moved from the thousand flowers to a really prioritized focus on GenAI.”

"The “thousand flowers” approach involved a number of use case ideas germinating from across the company, which made their way through a centralized governance board. At one point, employees were pursuing nearly 900 individual use cases, many that were redundant or simply didn’t work, he said. And as the company tracked the broad value of AI, including generative AI, data science and intelligent automation, it found that only 10% to 15% of use cases were driving about 80% of the value, he added.

Now J&J is drilling down into high-value generative AI use cases around drug discovery and supply chains, as well as an internal chatbot to answer questions on company policy.

“We’re prioritizing, we’re scaling, we’re looking at the things that make the most sense,” he said. “That was part of the maturation process we went through.”

WSJ | CIO Journal

In my view, a 10% to 15% success ratio is par for the course for IT projects. That is why it is vital to plan as I've described above and, like J&J, to weed out unviable use cases ruthlessly.  

AI in all its guises is not for the faint-hearted! Even more so with Generative AI, with its well-known vulnerabilities, e.g. being forgetful and hallucinating. Agentic AI adds complexity that might be manageable in a small MGA or broker environment but only with great difficulty in a large carrier.

 

Further Reading

Johnson & Johnson Pivots Its AI Strategy

CIOs on setting AI strategies and demonstrating value from technology investments in 2025: Gartner's CIO Survey

How AI Could Radically Transform the Insurance Industry